Open source, standard tooling for experimental protocols: towards Registered Reports
Old school:
Rubin (2017)
What if… -> Garden of forking paths
p-hacking
The need for significance and novelty
False-positive research
RRs were conceived to alter the incentives for authors and journals away from producing novel, positive, clean findings and towards conducting and publishing rigorous research on important questions. Soderberg et al. (2021)
More open, preregistered, reproducible by default
It does not matter whether the p-value is < 0.05
Fewer incentives for p-hacking
More trustworthy results
We (CSCN; ~5-10 PIs) used different technologies to develop experiments: Psychopy, Qualtrics, Limesurvey, jsPsych, etc.
Each of these has advantages and disadvantages.
Mostly, pragmatic aspects guided the decision: lab history and resources, coding experience, type of experiment (EEG/behavioral, lab/online), …
Each protocol started almost from scratch. Sometimes a single task would define the technology used.
At some point, we had multiple implementations of the same tasks in different technologies, not always exact replicas.
Some worked on certain computers, others did not. Output data were wildly different.
A voluntary two-question survey:
Initial idea: Gorka
Current developers: Gorka Navarrete and Herman Valencia
Initial development: @nicomero, @Fethrblaka, @nik0lai
Discussions, ideas, testing:
Esteban Hurtado
Alvaro Rivera
Juan Pablo Morales
jsPsychR is a set of open-source tools to help create experimental paradigms with jsPsych, simulate participants, and standardize data preparation and analysis.
We aim to have a large catalog of tasks available to use in the jsPsychMaker repo. Each of the tasks should run with jsPsychMonkeys to create virtual participants. And each task will have a sister script in jsPsychHelpeR to fully automate data preparation (re-coding, reversing items, calculating dimensions, etc.).
The final goal is to help you have the data preparation and analysis ready before collecting any real data, drastically reducing errors in your protocols, and making the move towards registered reports easier.
Create a protocol with three existing tasks:
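A sketch of how this step might look with jsPsychMaker (the function name follows the jsPsychMaker documentation, but the exact parameter names, task codes, and folder are illustrative placeholders and may differ from the current API):

```r
# Create a protocol combining three tasks from the jsPsychMaker catalog
# (task codes and output folder are illustrative placeholders)
jsPsychMaker::create_protocol(
  canonical_tasks = c("AIM", "IRI", "SBS"),
  folder_output = "~/Downloads/protocol999",
  launch_browser = TRUE
)
```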
Release a single Monkey!:
Release a horde of Monkeys in parallel:
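Both steps can be sketched with jsPsychMonkeys (the function name follows the jsPsychMonkeys documentation; the parameter names shown here are assumptions and may differ from the current API):

```r
# Release a single monkey (virtual participant) on the local protocol
jsPsychMonkeys::release_the_monkeys(
  uid = 1,
  local_folder_tasks = "~/Downloads/protocol999"
)

# Release a horde of 10 monkeys in parallel
jsPsychMonkeys::release_the_monkeys(
  uid = 1:10,
  local_folder_tasks = "~/Downloads/protocol999",
  sequential_parallel = "parallel"
)
```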
Create project for data preparation:
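A sketch of the setup step with jsPsychHelpeR (the function name follows the jsPsychHelpeR documentation; parameter names and paths are assumptions used for illustration):

```r
# Set up a jsPsychHelpeR project for the protocol, pointing to the
# folder where the monkeys' (or participants') data were saved
# (project id and paths are illustrative placeholders)
jsPsychHelpeR::run_initial_setup(
  pid = "protocol999",
  data_location = "~/Downloads/protocol999/.data",
  folder = "~/Downloads/jsPsychHelpeR-protocol999"
)
```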
Create protocol, simulate participants and prepare data…
Let’s try to download the data, process it and show a report with the results:
# Open ExperimentIssues project
rstudioapi::openProject("../Survey/jsPsychHelpeR-ExperimentIssues/jsPsychHelpeR-ExperimentIssues.Rproj", newSession = TRUE)
# If something fails, we always have the monkeys!
browseURL("../Survey/jsPsychHelpeR-ExperimentIssues/outputs/reports/report_analysis_monkeys.html")

Creating new scales and simple tasks is very easy, but complex experimental tasks require JavaScript and HTML knowledge (although a good number of examples are available)
Data preparation for new tasks requires expertise in R
Requires access to a server for online tasks
Lots of things to do:
Experimental tasks
More tasks, more translations
So far, development has been based on our needs
Upgrade to jsPsych v8 when available
Improve, clean, …
With jsPsychR, protocols are standardized, with (mostly) clean code. Also, fewer errors!
Data preparation is 90% automatic, standardized, and beautiful
Super easy to work on the analysis before collecting human data
Much easier to write up a good analysis plan
Sharing protocol, materials, data preparation is trivial (single command)
Creating future-proof full projects (with Docker) is one command away
Javascript programmers
R programmers
Testers
Task creators
Gorka Navarrete
gorkang@gmail.com
https://fosstodon.org/@gorkang